Joint approach of intent detection and slot filling based on multi-task learning
Aiguo SHANG, Xinjuan ZHU
Journal of Computer Applications    2024, 44 (3): 690-695.   DOI: 10.11772/j.issn.1001-9081.2023040443

With the application of pre-trained language models to Natural Language Processing (NLP) tasks, joint modeling of Intent Detection (ID) and Slot Filling (SF) has improved the performance of Spoken Language Understanding (SLU). Existing methods mostly focus on the interaction between intents and slots, while neglecting the influence of modeling differential text sequences on SLU tasks. A joint method for Intent Detection and Slot Filling based on Multi-task Learning (IDSFML) was proposed. Firstly, differential texts were constructed using a random masking strategy, and a neural network structure combining an AutoEncoder and an Attention mechanism (AEA) was designed to incorporate the features of differential text sequences into the SLU task. Secondly, a similarity distribution task was designed to make the representations of differential texts similar to those of the original texts. Finally, the three tasks of ID, SF, and differential text sequence similarity distribution were trained jointly. Experimental results on the Airline Travel Information Systems (ATIS) and SNIPS datasets show that, compared with the suboptimal baseline method SASGBC (Self-Attention and Slot-Gated on top of BERT with CRF), IDSFML improves the F1 score of slot filling by 1.9 and 1.6 percentage points respectively, and the accuracy of intent detection by 0.2 and 0.4 percentage points respectively, enhancing the accuracy of spoken language understanding tasks.
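The two training ingredients described above — building a differential text by random masking, and combining the three task losses for joint training — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `[MASK]` token, the masking probability, and the loss weights `alpha`, `beta`, `gamma` are all assumed hyperparameters, and the three per-task losses are taken as given scalars.

```python
import random

MASK = "[MASK]"

def make_differential_text(tokens, mask_prob=0.15, rng=None):
    """Build a differential text sequence by randomly replacing tokens
    with [MASK]; mask_prob is an illustrative hyperparameter."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [MASK if rng.random() < mask_prob else t for t in tokens]

def joint_loss(loss_id, loss_sf, loss_sim, alpha=1.0, beta=1.0, gamma=0.5):
    """Weighted sum of the intent-detection, slot-filling, and
    similarity-distribution losses; the weights are assumed, not
    taken from the paper."""
    return alpha * loss_id + beta * loss_sf + gamma * loss_sim

tokens = "show me flights from boston to denver".split()
diff_tokens = make_differential_text(tokens)
total = joint_loss(loss_id=1.0, loss_sf=2.0, loss_sim=0.4)
```

In practice each of the three losses would come from its own head over a shared encoder (e.g. a sentence-level cross-entropy for ID, a token-level cross-entropy for SF, and a distance between the original and differential text representations), with the weighted sum backpropagated through the shared parameters.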
